Poster: reviews2use Date: Dec 3, 2016 5:46pm
Forum: faqs Subject: difference between robots.txt blocking and excluded links?

Hello again!
Poking around at deletion requests in this forum, I was wondering what the difference is between blocking a site or directory with robots.txt and having a specific link excluded by request. I understand robots.txt merely prevents new captures and keeps the archive from displaying previous captures, but the site owner can remove the robots.txt file and the archive will resume capturing and displaying pages.

Are links that are excluded from the archive truly deleted? Can a site owner have URLs returned to the archive if they were once excluded, and would this restore all the previous snapshots as well? Sorry if this is confusing. I'm not sure what all the correct technical terms are or how these functions work.
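
(For reference, this is the sort of robots.txt rule I mean by "blocking a site or directory" -- just an illustration with a made-up directory name, not anything from a real site:)

    # Block every crawler from one directory
    User-agent: *
    Disallow: /old-photos/

    # Or block the entire site
    User-agent: *
    Disallow: /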